PAC-Bayesian Generalization Bound


A PAC-Bayesian Generalization Bound for Equivariant Networks

Neural Information Processing Systems

Equivariant networks capture the inductive bias about the symmetry of the learning task by building those symmetries into the model. In this paper, we study how equivariance relates to generalization error, utilizing PAC-Bayesian analysis for equivariant networks, where the transformation laws of feature spaces are determined by group representations. By using perturbation analysis of equivariant networks in the Fourier domain for each layer, we derive norm-based PAC-Bayesian generalization bounds. The bound characterizes the impact of group size, and of the multiplicity and degree of irreducible representations, on the generalization error, and thereby provides a guideline for selecting them. In general, the bound indicates that using a larger group size in the model improves the generalization error, a finding substantiated by extensive numerical experiments.
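For orientation, norm-based analyses of this kind refine a generic PAC-Bayesian bound. One common McAllester-style form (a standard statement, not the paper's equivariant-specific bound) is:

```latex
% Standard McAllester-style PAC-Bayesian bound (illustrative, not the
% paper's bound): for any prior P fixed before seeing the m-sample S,
% with probability at least 1 - \delta over S, simultaneously for all
% posteriors Q over hypotheses h,
\mathbb{E}_{h \sim Q}\!\left[L(h)\right]
  \;\le\; \mathbb{E}_{h \sim Q}\!\left[\widehat{L}_S(h)\right]
  \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{m}{\delta}}{2(m-1)}}
```

Norm-based bounds of the kind described in the abstract control the KL term via layer-wise perturbation analysis, which is where the group size and the multiplicities and degrees of the irreducible representations enter.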


PAC-Bayesian Generalization Bounds for Knowledge Graph Representation Learning

Lee, Jaejun, Hwang, Minsung, Whang, Joyce Jiyoung

arXiv.org Machine Learning

While a number of knowledge graph representation learning (KGRL) methods have been proposed over the past decade, very few theoretical analyses have been conducted on them. In this paper, we present the first PAC-Bayesian generalization bounds for KGRL methods. To analyze a broad class of KGRL models, we propose a generic framework named ReED (Relation-aware Encoder-Decoder), which consists of a relation-aware message passing encoder and a triplet classification decoder. Our ReED framework can express at least 15 different existing KGRL models, including not only graph neural network-based models such as R-GCN and CompGCN but also shallow-architecture models such as RotatE and ANALOGY. Our generalization bounds for the ReED framework provide theoretical grounds for the commonly used tricks in KGRL, e.g., parameter-sharing and weight normalization schemes, and guide desirable design choices for practical KGRL methods. We empirically show that the critical factors in our generalization bounds can explain actual generalization errors on three real-world knowledge graphs.
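To make the encoder-decoder structure concrete, here is a hypothetical minimal sketch in the spirit of ReED: an R-GCN-flavoured relation-aware message-passing layer feeding a DistMult-flavoured triplet classification decoder. All names, shapes, and the normalization step are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 5, 2, 4

# Entity embeddings and one weight matrix per relation: edges of the same
# relation share parameters, one of the tricks such bounds speak to.
H = rng.normal(size=(n_entities, dim))
W = rng.normal(size=(n_relations, dim, dim))

# Knowledge graph as (head, relation, tail) triplets.
triplets = [(0, 0, 1), (1, 1, 2), (3, 0, 4)]

def encode(H, W, triplets):
    """One relation-aware message-passing layer: each head aggregates
    relation-transformed messages from its tails."""
    out = H.copy()
    for h, r, t in triplets:
        out[h] += H[t] @ W[r]
    # Row-wise normalization (a stand-in for the weight/feature
    # normalization schemes the bounds justify).
    return out / np.linalg.norm(out, axis=1, keepdims=True)

def score(Z, r_emb, h, r, t):
    """DistMult-style triplet score: <z_h, diag(r), z_t>."""
    return float(np.sum(Z[h] * r_emb[r] * Z[t]))

Z = encode(H, W, triplets)
r_emb = rng.normal(size=(n_relations, dim))
s = score(Z, r_emb, 0, 0, 1)
prob = 1.0 / (1.0 + np.exp(-s))  # triplet classification probability
```

Shallow models such as DistMult fall out of the same template by making the encoder the identity, which is how one framework can cover both graph-neural and shallow architectures.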


PAC-Bayesian Generalization Bounds for Adversarial Generative Models

Mbacke, Sokhna Diarra, Clerc, Florence, Germain, Pascal

arXiv.org Machine Learning

We extend PAC-Bayesian theory to generative models and develop generalization bounds for models based on the Wasserstein distance and the total variation distance. Our first result on the Wasserstein distance assumes the instance space is bounded, while our second result takes advantage of dimensionality reduction. Our results naturally apply to Wasserstein GANs and Energy-Based GANs, and our bounds provide new training objectives for these two. Moreover, having generalization bounds not only contributes to the theoretical understanding of GANs themselves, but also to the understanding of the structure of real-life datasets, if those can be provably approximated by GAN-generated data. In addition, given that GANs are used for data-augmentation in fields such as medical image classification (see e.g. Frid-Adar et al., 2018), theoretical guarantees can substantiate the soundness of such applications.
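As a toy illustration of the two discrepancy measures the bounds are stated in (the distances themselves, not the bounds), the 1-Wasserstein and total variation distances between two empirical 1-D samples can be computed directly. The sampler, sample size, and histogram bins below are arbitrary assumptions:

```python
import random

random.seed(0)
x = sorted(random.gauss(0.0, 1.0) for _ in range(1000))
y = sorted(random.gauss(0.5, 1.0) for _ in range(1000))

# 1-Wasserstein distance between two equal-size empirical measures on R:
# the average distance between matched order statistics.
w1 = sum(abs(a - b) for a, b in zip(x, y)) / len(x)

# Total variation distance between the induced histograms:
# 0.5 * sum over bins of |p_bin - q_bin|.
def hist(sample, lo=-4.0, hi=4.0, bins=40):
    counts = [0] * bins
    for v in sample:
        i = min(bins - 1, max(0, int((v - lo) / (hi - lo) * bins)))
        counts[i] += 1
    return [c / len(sample) for c in counts]

p, q = hist(x), hist(y)
tv = 0.5 * sum(abs(a - b) for a, b in zip(p, q))
```

The contrast between the two distances mirrors the abstract's split: the Wasserstein result needs a bounded instance space, while total variation is bounded by construction.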


A PAC-Bayesian Generalization Bound for Equivariant Networks

Behboodi, Arash, Cesa, Gabriele, Cohen, Taco

arXiv.org Artificial Intelligence

Equivariant networks capture the inductive bias about the symmetry of the learning task by building those symmetries into the model. In this paper, we study how equivariance relates to generalization error, utilizing PAC-Bayesian analysis for equivariant networks, where the transformation laws of feature spaces are determined by group representations. By using perturbation analysis of equivariant networks in the Fourier domain for each layer, we derive norm-based PAC-Bayesian generalization bounds. The bound characterizes the impact of group size, and of the multiplicity and degree of irreducible representations, on the generalization error, and thereby provides a guideline for selecting them. In general, the bound indicates that using a larger group size in the model improves the generalization error, a finding substantiated by extensive numerical experiments.


PAC-Bayesian Generalization Bound on Confusion Matrix for Multi-Class Classification

Morvant, Emilie, Koço, Sokol, Ralaivola, Liva

arXiv.org Machine Learning

In this work, we propose a PAC-Bayes bound for the generalization risk of the Gibbs classifier in the multi-class classification framework. The novelty of our work is the critical use of the confusion matrix of a classifier as an error measure; this puts our contribution in the line of work aiming at dealing with performance measures that are richer than a mere scalar criterion such as the misclassification rate. Thanks to very recent and beautiful results on matrix concentration inequalities, we derive two bounds showing that the true confusion risk of the Gibbs classifier is upper-bounded by its empirical risk plus a term depending on the number of training examples in each class. To the best of our knowledge, these are the first PAC-Bayes bounds based on confusion matrices.
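To illustrate the error measure in question (a hypothetical toy example, not the paper's construction): a row-normalized confusion matrix, with its off-diagonal "confusion" part summarized by an operator norm of the kind matrix concentration inequalities control.

```python
import math

# Toy predictions for k = 3 classes; labels and predictions are made up.
y_true = [0, 0, 1, 1, 2, 2, 2, 0]
y_pred = [0, 1, 1, 1, 2, 0, 2, 0]
k = 3

# Row-normalized confusion matrix C[i][j]: fraction of class-i examples
# predicted as class j (off-diagonal mass is per-class error).
counts = [[0] * k for _ in range(k)]
per_class = [0] * k
for t, p in zip(y_true, y_pred):
    counts[t][p] += 1
    per_class[t] += 1
C = [[counts[i][j] / per_class[i] for j in range(k)] for i in range(k)]

# Error part: zero the diagonal, keep only the confusions.
E = [[0.0 if i == j else C[i][j] for j in range(k)] for i in range(k)]

def opnorm(M, iters=50):
    """Operator (spectral) norm of M via power iteration on M^T M.
    Assumes the iterate never lands exactly in the null space."""
    v = [1.0] * k
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(k)) for i in range(k)]  # M v
        w = [sum(M[i][j] * u[i] for i in range(k)) for j in range(k)]  # M^T u
        n = math.sqrt(sum(c * c for c in w))
        v = [c / n for c in w]
    u = [sum(M[i][j] * v[j] for j in range(k)) for i in range(k)]
    return math.sqrt(sum(c * c for c in u))

confusion_risk = opnorm(E)
```

Unlike a scalar misclassification rate, this measure keeps per-class information: a classifier that sacrifices a rare class for overall accuracy shows up in the off-diagonal rows, which is exactly why per-class sample counts appear in the bounds.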